Results 1 - 19 of 19
1.
ACM International Conference Proceeding Series ; : 311-317, 2022.
Article in English | Scopus | ID: covidwho-20232081

ABSTRACT

The speech signal has numerous features that represent the characteristics of a specific language and convey emotions. It also contains information that can be used to identify the mental, psychological, and physical states of the speaker. Recently, acoustic analysis of speech signals has offered a practical, automated, and scalable method for medical diagnosis and for monitoring the symptoms of many diseases. In this paper, we explore deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the performance of the acoustic features and COVID-19 symptoms in terms of their ability to diagnose COVID-19. The proposed methodology consists of a pre-trained Visual Geometry Group (VGG-16) model applied to Mel spectrogram images to extract deep audio features, a K-means algorithm that determines effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings indicate the proposed methodology's capability to distinguish COVID-19 from non-COVID-19 cases using acoustic features, compared to COVID-19 symptoms, achieving an accuracy of 97%. The experimental results show that the proposed method remarkably improves the accuracy of COVID-19 detection over the handcrafted features used in previous studies. © 2022 ACM.
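For illustration, a minimal sketch (not the authors' code) of the deep-feature step the abstract outlines: a Mel spectrogram treated as an image and passed through a pre-trained VGG-16. The sampling rate, pooling choice, and file handling are assumptions; the K-means feature selection and GA-SVM stages would follow on the resulting vectors.

```python
import numpy as np
import librosa
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Pre-trained backbone without the classification head; "avg" pooling gives a
# fixed 512-dimensional embedding regardless of the spectrogram's time length.
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_audio_features(wav_path, sr=16000):
    y, _ = librosa.load(wav_path, sr=sr)
    mel_db = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128), ref=np.max)
    # Rescale to 0-255 and replicate to three channels so the spectrogram
    # can be treated as an RGB image by VGG-16.
    img = 255 * (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-9)
    rgb = np.stack([img, img, img], axis=-1)[np.newaxis, ...]
    return vgg.predict(preprocess_input(rgb), verbose=0).ravel()
```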

2.
2023 International Conference on Intelligent Systems for Communication, IoT and Security, ICISCoIS 2023 ; : 157-161, 2023.
Article in English | Scopus | ID: covidwho-2327239

ABSTRACT

This project aims to devise an alternative for coronavirus detection using various audio signals. The aim is to create a machine-learning model, assisted by speech processing techniques, that can be trained to distinguish symptomatic and asymptomatic coronavirus cases. Here, features specific to a person's vocal cords are used for COVID detection. The procedure is to train the classifier using a dataset containing data from people of various ages, both infected and disease-free, including patients with comorbidities. We present a machine learning-based coronavirus classifier model that can separate COVID-positive from COVID-negative patients using cough, breathing, and speech recordings. The model was trained and evaluated using several machine learning classifiers such as a Random Forest Classifier, Logistic Regression (LR), a Decision Tree Classifier, k-Nearest Neighbour (KNN), a Naive Bayes Classifier, Linear Discriminant Analysis, and a neural network. This project helps track COVID-19 patients at low cost using a non-contact procedure and reduces the workload on testing centers. © 2023 IEEE.
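A hedged sketch of the classifier comparison the abstract describes, using scikit-learn; the feature matrix X and labels y are assumed to have already been extracted from the cough, breathing, and speech recordings.

```python
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}

def compare(X, y):
    # 5-fold cross-validated accuracy for each candidate classifier.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```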

3.
Biomed Signal Process Control ; : 105026, 2023 May 15.
Article in English | MEDLINE | ID: covidwho-2312740

ABSTRACT

Since 2019, the entire world has been facing Corona Virus Disease 2019 (COVID-19), a highly hazardous and contagious disease. The virus can be identified and diagnosed based on its symptoms, among which cough is the primary one used to detect COVID-19. Existing methods require a long processing time, and early screening and detection is a complex task. To overcome these drawbacks, a novel ensemble-based deep learning model is designed using heuristic development. The prime intention of the designed work is to detect COVID-19 using cough audio signals. At the initial stage, the source signals are fetched and undergo a signal decomposition phase using Empirical Mean Curve Decomposition (EMCD). From the decomposed signal, Mel Frequency Cepstral Coefficients (MFCC), spectral features, and statistical features are extracted. Further, all three feature sets are fused into optimal weighted features, with the optimal weight values determined by the Modified Cat and Mouse Based Optimizer (MCMBO). Lastly, the optimal weighted features are fed as input to the Optimized Deep Ensemble Classifier (ODEC), which fuses various classifiers such as Radial Basis Function (RBF), Long Short-Term Memory (LSTM), and Deep Neural Network (DNN) models. To attain the best detection results, the parameters of the ODEC are optimized by the MCMBO algorithm. In validation, the designed method attains 96% accuracy and 92% precision. The result analysis thus shows that the proposed work achieves detection performance that can aid practitioners in the early diagnosis of COVID-19.
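An illustrative sketch of the three standard feature groups the abstract names (MFCC, spectral, and statistical features), extracted with librosa and numpy. The EMCD decomposition, MCMBO weighting, and ODEC classifier are specific to the paper and are not reproduced; plain concatenation stands in for the learned fusion.

```python
import numpy as np
import librosa

def cough_features(y, sr):
    # MFCC features, averaged over time.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    # A few common spectral descriptors.
    spectral = np.array([
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(),
        librosa.feature.zero_crossing_rate(y).mean(),
    ])
    # Simple statistics of the raw waveform.
    statistical = np.array([y.mean(), y.std(), np.abs(y).max(), np.median(y)])
    # Plain concatenation; the paper instead learns optimal fusion weights.
    return np.concatenate([mfcc, spectral, statistical])
```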

4.
Digit Commun Netw ; 2022 Nov 14.
Article in English | MEDLINE | ID: covidwho-2320654

ABSTRACT

The COVID-19 pandemic has imposed new challenges on the healthcare industry, as hospital staff are exposed to a massive coronavirus load when registering new patients, taking temperatures, and providing care. The Ebola epidemic of 2014 is another example of an outbreak during which a hospital in New York decided to use an audio-based communication system to protect nurses. This idea quickly turned into an Internet of Things (IoT) healthcare solution for communicating with patients remotely. However, it has also grabbed the attention of criminals who use this medium as a cover for secret communication. The merging of signal processing and machine-learning techniques has led to the development of steganalyzers with very high efficiency, but since the statistical properties of normal audio files differ from those of purely speech audio files, current steganalysis practices are not efficient enough for this type of content. This research considers the Percent of Equal Adjacent Samples (PEAS) feature for speech steganalysis. This feature efficiently discriminates least-significant-bit (LSB) stego speech samples from clean ones using a single analysis dimension. A sensitivity of 99.82% was achieved for the steganalysis of 50% embedded stego instances using a classifier based on the Gaussian membership function.
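A minimal sketch, assuming integer PCM samples, of what the named feature measures: the Percent of Equal Adjacent Samples is simply the fraction of neighbouring sample pairs carrying identical values. One reading of why it discriminates, consistent with the abstract, is that LSB embedding randomizes the lowest bit and so breaks up runs of equal samples found in clean speech.

```python
import numpy as np

def peas(samples):
    """Percent of Equal Adjacent Samples for a 1-D PCM signal."""
    samples = np.asarray(samples)
    equal_pairs = np.sum(samples[1:] == samples[:-1])
    return 100.0 * equal_pairs / (len(samples) - 1)
```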

5.
2022 International Conference on Data Science, Agents and Artificial Intelligence, ICDSAAI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2261650

ABSTRACT

Clinicians have long used audio signals created by the human body as indications to diagnose sickness or track disease progression. Preliminary research indicates promise in detecting COVID-19 from voice and cough acoustic signals. In this paper, various popular convolutional neural networks (CNNs) are employed to detect COVID-19 from cough sounds available in the COUGHVID open-source dataset. The CNN models are given input in the form of hand-crafted features or raw signals represented as spectrograms, and the CNN architectures for both types of input have been optimized to enhance performance. COVID-19 could be detected from cough sounds with an accuracy of 77.5% using a CNN on handcrafted features, and 72.5% using VGG16 on spectrograms. However, the results show that concatenating the two in a multi-head deep neural network yields higher accuracy than using either hand-extracted features or spectrograms of raw signals alone as input. The classification accuracy improved to 81.25% when ResNet50 was employed in the multi-head deep neural network, which was higher than that obtained with VGG16 and MobileNet. © 2022 IEEE.
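A hedged sketch of a multi-head network along the lines the abstract describes: one branch passes the spectrogram through ResNet50, the other takes a vector of hand-crafted features, and the two are concatenated before the classification head. Input shapes and layer sizes are illustrative assumptions, not the paper's configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

spec_in = layers.Input(shape=(224, 224, 3), name="spectrogram")
feat_in = layers.Input(shape=(40,), name="handcrafted_features")

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
spec_branch = backbone(spec_in)                       # 2048-d embedding
feat_branch = layers.Dense(64, activation="relu")(feat_in)

merged = layers.concatenate([spec_branch, feat_branch])
merged = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid", name="covid_probability")(merged)

model = models.Model(inputs=[spec_in, feat_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```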

6.
51st International Congress and Exposition on Noise Control Engineering, Internoise 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2259964

ABSTRACT

Wearing face masks (alongside physical distancing) provides some protection against COVID-19. Face masks can also change how people communicate and subsequently affect speech signal quality. This study investigated how two common face mask types affect the acoustic analysis and perception of speech. Quantitative and qualitative assessments were carried out by measuring sound pressure levels and playing recordings back to a group of listeners. The responses showed that masks alter the speech signal, with downstream effects on a speaker's intelligibility. Masks muffle speech sounds at higher frequencies, so the acoustic effect of a speaker wearing a face mask is equivalent to the listener having a slight high-frequency hearing loss. When asked about perceived audibility, over 83% of the participants were able to clearly hear the no-mask audio clip; however, 41% of the participants rated it only moderately audible with N95 and face-shield masks. Because they remove visual access, face masks also act as communication barriers, with 50% of the people finding it difficult to understand others because they could not read their lips. Based on these findings, it is reasonable to hypothesize that wearing a mask attenuates speech spectra at similar frequency bands. © 2022 Internoise 2022 - 51st International Congress and Exposition on Noise Control Engineering. All rights reserved.

7.
14th International Conference on Social Robotics, ICSR 2022 ; 13818 LNAI:217-227, 2022.
Article in English | Scopus | ID: covidwho-2257940

ABSTRACT

In this paper, we present the development of a novel autonomous social robot deep learning architecture capable of real-time COVID-19 screening during human-robot interactions. The architecture allows for autonomous preliminary multi-modal COVID-19 detection of cough and breathing symptoms using a VGG16 deep learning framework. We train and validate our VGG16 network using existing COVID datasets. We then perform real-time non-contact preliminary COVID-19 screening experiments with the Pepper robot. The results for our deep learning architecture demonstrate: 1) an average computation time of 4.57 s for detection, and 2) an accuracy of 84.4% with respect to self-reported COVID symptoms. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

8.
2022 IEEE Global Communications Conference, GLOBECOM 2022 ; : 5510-5515, 2022.
Article in English | Scopus | ID: covidwho-2228774

ABSTRACT

Digital Contact Tracing (DCT) has been proposed to limit the spread of COVID-19 by allowing targeted quarantine of close contacts. The protocol is designed to be lightweight, broadcasting limited-time tokens over Bluetooth Low Energy (BLE) beacons and allowing receivers to record contacts pseudonymously. However, currently proposed protocols have vulnerabilities that permit an adversary to perform massive surveillance or cause significant numbers of false-positive alerts. In this paper, we present AcousticMask, which encrypts broadcast messages using a key derived from the audio signal present at each device. Our results show that a receiver sharing the same social space as a sender will hear all of the sender's ephemeral IDs (EphIDs) with a Hamming distance of at most 3, which can be decrypted at a rate of 10 Hz on a Raspberry Pi 4, while achieving a security factor of over 2^108 against attackers in our testing set. This shows that AcousticMask is lightweight enough for DCT and provides sufficient security to protect users' privacy. © 2022 IEEE.
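A loose sketch of the general idea only, not the AcousticMask protocol itself: both devices quantize the ambient audio they observe, hash the result into a symmetric key, and the sender encrypts its ephemeral ID under that key. The quantization scheme and all parameters here are illustrative assumptions; the paper's handling of near-identical (Hamming distance ≤ 3) keys is not reproduced.

```python
import hashlib
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_from_audio(samples, sr, frame_ms=100):
    frame = int(sr * frame_ms / 1000)
    usable = len(samples) // frame * frame
    frames = np.reshape(samples[:usable], (-1, frame))
    # Coarse 1-bit quantization of per-frame energy; devices in the same room
    # should derive (nearly) the same bit string from the shared sound field.
    energy = (frames ** 2).mean(axis=1)
    bits = (energy > np.median(energy)).astype(np.uint8)
    return hashlib.sha256(bits.tobytes()).digest()  # 32-byte AES key

def encrypt_eph_id(eph_id: bytes, key: bytes) -> bytes:
    # AES-GCM with a random nonce prepended to the ciphertext.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, eph_id, None)
```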

9.
1st International Conference on Intelligent Systems and Applications, ICISA 2022 ; 959:333-350, 2023.
Article in English | Scopus | ID: covidwho-2219931

ABSTRACT

The delta and omicron variants of the coronavirus are much more contagious and affect a greater percentage of the human population. In this research, an attempt is made to predict the classification of clinical emergency treatment for patients infected with corona variants using their recorded cough sound files. Cough audio signal features such as zero crossing rate, mel-frequency cepstral coefficients (MFCC), chromagram (chroma_stft), spectral centroid, spectral roll-off, and spectral bandwidth are extracted and stored along with patient ID, date, and timings. The recorded cough audio files are cleaned, pre-processed, and normalized through digital signal processing to obtain a training dataset for building an intelligent ML model, using a multiclass SVM classifier to predict the class labels with maximum accuracy. The model proposed in this research paper helps to systematically plan and handle emergency treatment of patients by classifying their severity based on the cough audio signal using SVM. The built model predicts and classifies the emergency treatment level as low, medium, or high with 96% accuracy. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
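An illustrative sketch of the feature set listed in the abstract, extracted with librosa and fed to a multiclass SVM that maps coughs to low/medium/high emergency levels. The normalization pipeline and SVM hyperparameters are assumptions.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    return np.hstack([
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1),
        librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
    ])

# Normalization plus a multiclass SVM (one-vs-one is handled internally).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
# clf.fit(X_train, y_train); clf.predict(X_new) -> "low" / "medium" / "high"
```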

10.
9th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2213393

ABSTRACT

The COVID-19 pandemic poses global challenges that surpass the boundaries of country, religion, race, and economy. Testing the condition of COVID-19 patients remains a challenging task due to the lack of adequate medical supplies and well-trained personnel, and because reverse transcription polymerase chain reaction (RT-PCR) testing is an expensive, long-drawn-out process that violates social distancing. In this direction, we used a microbiologically confirmed COVID-19 dataset based on cough recordings from the Coswara dataset. Coswara is an open challenge dataset that invites researchers to investigate its sound recordings, collected from COVID-19 infected and non-COVID-19 individuals, for classification between positive and negative detection. These recordings were collected from multiple countries through a crowd-sourcing website, and the dataset is released open access. Our work mainly focuses on cough sound recordings. We developed acoustic biosignature feature extractors to screen for potential problems from cough recordings and to provide personalized advice on a particular patient's state so that their condition can be monitored in real time. In our work, cough sound recordings are converted into Mel Frequency Cepstral Coefficients (MFCCs) and passed through a Gaussian Mixture Model (GMM)-based pattern recognizer, with decision making based on a binary pre-screening diagnostic. The GMM is applied to develop a biomarker-based detection model, which, when validated on infected and non-infected patients for two-class classification, achieves an accuracy of 73.22% on the Coswara dataset; it is also compared with existing classifiers. © 2022 IEEE.
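A hedged sketch of the MFCC + GMM pre-screening pipeline the abstract outlines: one Gaussian mixture per class, with the decision taken by the higher average log-likelihood. Component counts and MFCC settings are assumptions.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

def train_gmms(pos_recordings, neg_recordings, sr=16000):
    gmm_pos = GaussianMixture(n_components=8, covariance_type="diag")
    gmm_neg = GaussianMixture(n_components=8, covariance_type="diag")
    gmm_pos.fit(np.vstack([mfcc_frames(y, sr) for y in pos_recordings]))
    gmm_neg.fit(np.vstack([mfcc_frames(y, sr) for y in neg_recordings]))
    return gmm_pos, gmm_neg

def classify(y, sr, gmm_pos, gmm_neg):
    X = mfcc_frames(y, sr)
    # score() returns the average per-frame log-likelihood under each model.
    return "COVID-19" if gmm_pos.score(X) > gmm_neg.score(X) else "non-COVID-19"
```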

11.
5th International Conference on Computer Science and Software Engineering, CSSE 2022 ; : 522-526, 2022.
Article in English | Scopus | ID: covidwho-2194137

ABSTRACT

The severe acute respiratory syndrome coronavirus 2 is a novel type of coronavirus that causes COVID-19. The virus has infected more than 590 million individuals, resulting in a global pandemic. Traditional diagnosis methods are no longer effective due to the exponential rise in infection rates. Quick and accurate COVID-19 diagnosis is made possible by machine learning (ML), which also alleviates the burden on healthcare systems. After the effective use of cough audio signal classification in diagnosing a number of respiratory illnesses, there has been significant interest in using ML to enable universal COVID-19 screening. The purpose of the current study is to determine people's COVID-19 status through machine learning algorithms. We developed a Random Forest-based model and achieved an accuracy of 0.873 on the COUGHVID dataset, which demonstrates the potential of using audio signals as a cheap, accessible, and accurate COVID-19 screening tool. © 2022 ACM.
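A minimal sketch of the Random Forest screening model the abstract reports; X and y stand for features and COVID-status labels assumed to have been extracted from the COUGHVID recordings beforehand.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_rf(X, y):
    # Stratified hold-out split, then fit and report test accuracy.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)
    rf = RandomForestClassifier(n_estimators=300, random_state=42)
    rf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
    return rf
```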

12.
14th International Joint Conference on Computational Intelligence, IJCCI 2022 ; 2022-October:367-374, 2022.
Article in English | Scopus | ID: covidwho-2168271

ABSTRACT

The importance of remote voice communication has greatly increased during the COVID-19 pandemic. With that comes the problem of degraded speech quality because of background noise. While there can be many unwanted background sounds, this work focuses on dynamically suppressing keyboard sounds in speech signals using artificial neural networks. Using Mel spectrograms as inputs, the neural networks are trained to predict how much power within a frequency band inside a time window has to be removed to suppress the keyboard sound. For this goal, we generated audio signals combining samples from two publicly available datasets of speaker and keyboard noise recordings. Additionally, we compare three network architectures with different parameter settings as well as the open-source tool RNNoise. The results from the experiments described in this paper show that artificial neural networks can be successfully applied to remove complex background noise from speech signals. Copyright © 2022 by SCITEPRESS - Science and Technology Publications, Lda.
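A hedged sketch of the general approach described: a small network takes Mel spectrogram frames of noisy speech and predicts, per frequency bin, how much power to keep (a suppression mask). Shapes, layer sizes, and the training setup are illustrative assumptions, not the paper's architecture.

```python
from tensorflow.keras import layers, models

N_MELS, CONTEXT = 80, 9          # mel bins, frames of temporal context

model = models.Sequential([
    layers.Input(shape=(CONTEXT, N_MELS)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
    # Sigmoid output in [0, 1]: fraction of power to keep per mel bin for the
    # centre frame; multiplying the noisy spectrum by this mask suppresses
    # the keyboard component while preserving speech.
    layers.Dense(N_MELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
# Training pairs would be (noisy speech+keyboard spectrogram windows,
# ideal ratio masks computed from the separate clean and noise signals).
```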

13.
2022 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2161378

ABSTRACT

Detecting COVID-19 from audio signals, such as breathing and coughing, can be used as a fast and efficient pre-testing method to reduce virus transmission. Motivated by the promising results of deep learning networks in modelling time sequences, we present a temporal-oriented broadcasting residual learning method that achieves efficient computation and high accuracy with a small model size. Based on the EfficientNet architecture, our novel network, named Temporal-oriented ResNet (TorNet), consists of a broadcasting learning block. The network obtains useful audio-temporal features and higher-level embeddings effectively, with much less computation than Recurrent Neural Networks (RNNs), which are typically used to model temporal information. TorNet achieves 72.2% Unweighted Average Recall (UAR) on the INTERSPEECH 2021 Computational Paralinguistics Challenge COVID-19 Cough Sub-Challenge, thereby showing competitive results with higher computational efficiency than other state-of-the-art alternatives. © 2022 IEEE.

14.
2nd IEEE International Conference on Intelligent Technologies, CONIT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2029216

ABSTRACT

The fourth industrial revolution mostly revolves around new techniques and concepts such as artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT). The recent spurt of coronavirus has wreaked havoc across the globe and led to a huge loss of human lives. An intelligent system with innovative technologies can be implemented to address the rapid spread of the deadly virus. In this paper, we present our patented [1] idea of a Smart Face Shield (SMAFS) that can not only help to maintain appropriate social distancing in a crowded place but also identify a person with preliminary symptoms of coronavirus. SMAFS is designed as a technically improved face shield that maintains social distancing through the appropriate use of a proximity sensor and measures the wearer's temperature using a contact temperature sensor. LEDs and a buzzer are placed strategically to alert people via visual and audio signals, respectively. Such a precautionary detection and proximity alert prototype can prove instrumental in early diagnosis and isolation, aiding crowd management and free movement in places of social gathering. © 2022 IEEE.

15.
Multimed Tools Appl ; 81(23): 33569-33589, 2022.
Article in English | MEDLINE | ID: covidwho-1942439

ABSTRACT

The first step in reducing the effect of a viral disease is to prevent its spread, which can be achieved by implementing social distancing (reducing the number of close physical interactions between people). Almost every viral disease that is transmitted through the air and enters through the mouth or nose affects our vocal organs, causing changes in the features of our voice that can be traced through voice feature analysis with deep learning. Detecting an affected person using deep neural networks and tracking them would help in implementing social distancing rules in areas where they are needed. The aim of this paper is to study different solutions which help in enabling, encouraging, and even enforcing social distancing. In this paper, we implemented and analyzed scenarios based on detecting COVID-19 patients from cough and tracking them using smart cameras or emerging wireless technologies, with deep learning techniques for predicting and preventing the spread of disease. These techniques are also easy to implement in the initial stage of any pandemic and will help in the implementation of smart social distancing (applied whenever needed).

16.
9th International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2022 ; 13258 LNCS:114-124, 2022.
Article in English | Scopus | ID: covidwho-1899007

ABSTRACT

Estimating the capacity of a room or venue is essential to avoid overcrowding that could compromise people's safety. Having enough free space to guarantee a minimal safety distance between people is also essential for health reasons, as in the current COVID-19 pandemic. Already existing systems for automatic crowd counting are mostly based on image or video data, and some of them use deep learning architectures. In this paper, we study the viability of existing deep learning crowd counting systems and propose new alternatives based on network architectures containing convolutional layers that rely exclusively on environmental audio signals. The proposed architecture is able to infer the actual occupancy with higher accuracy than previous proposals. Conclusions are drawn from the accuracy obtained with our approach, and the possible scope of deep learning-based crowd counting systems is discussed. © 2022, Springer Nature Switzerland AG.

17.
18th IEEE India Council International Conference, INDICON 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1752416

ABSTRACT

The inability to test at scale has become humanity's Achilles' heel in the ongoing war against the COVID-19 pandemic. Various intelligent diagnostic approaches have therefore been proposed in the literature to fight this pandemic situation. In this paper, an Artificial Intelligence (AI)-powered automatic screening solution is proposed for rapid and accurate diagnosis of COVID-19. The proposed system analyzes the audio signals of human beings using a modified DenseNet121 to detect COVID-19 cases accurately. The proposed methodology has been applied to a publicly available benchmark dataset known as Coswara [1]. Experimental results demonstrate the efficacy of the proposed system in terms of blind test accuracy. © 2021 IEEE.

18.
11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, IDAACS 2021 ; 2:1016-1021, 2021.
Article in English | Scopus | ID: covidwho-1702068

ABSTRACT

As the deadly COVID-19 outbreak spreads across the globe, the use of IoT in the surveillance of patients can prevent us from facing catastrophic repercussions. This paper aims to develop a real-time health monitoring system in which sensors are used to continuously observe a patient's body temperature, heart rate, and oxygen level. A comparison of two CNN architectures, VGG19 and DenseNet, was also undertaken for audio signal processing, with VGG19 offering more promising accuracy in identifying coughing. Additionally, as severe coughing can be an alarm for lung diseases, the system records the number of consecutive coughs of a patient, as well as the corresponding timestamp. Moreover, if patients feel infirm, they can seek assistance from a nearby doctor or nurse through Google's Speech-to-Text API. The data is then transmitted to a centralized database, where clinicians can monitor patients' symptoms in real time by extracting the data via a web application. © 2021 IEEE.

19.
14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1672582

ABSTRACT

Cough is a common symptom of respiratory and lung diseases. Cough detection is important to prevent, assess, and control epidemics such as COVID-19. This paper proposes a model to detect cough events from cough audio signals. The models are trained on a dataset that combines the ESC-50 dataset with self-recorded cough recordings. The test dataset contains cough recordings collected from inpatients of the respiratory disease department in Ruijin Hospital. We build a total of 15 cough detection models based on different numbers of features selected by the Random Frog, Uninformative Variable Elimination (UVE), and Variable Influence on Projection (VIP) algorithms, respectively. The optimal model is based on 20 features selected from the Mel Frequency Cepstral Coefficient (MFCC) features by the UVE algorithm and classified with a Support Vector Machine (SVM) linear two-class classifier. The best cough detection model achieves accuracy, recall, precision, and F1-score of 94.9%, 97.1%, 93.1%, and 0.95, respectively. Its excellent performance with a lower-dimensional feature vector shows the potential for application to mobile devices such as smartphones, making cough detection remote and non-contact. © 2021 IEEE.
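A hedged sketch of the shape of the final pipeline the abstract reports: MFCC-derived features reduced to 20 dimensions and classified with a linear two-class SVM. The UVE selection is replaced here by a generic univariate selector (SelectKBest) purely for illustration.

```python
import numpy as np
import librosa
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def mfcc_stats(y, sr):
    # Per-coefficient mean and standard deviation as candidate features.
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.hstack([m.mean(axis=1), m.std(axis=1)])  # 40 candidates

cough_detector = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),   # keep 20 features, as in the paper
    SVC(kernel="linear"),
)
# cough_detector.fit(X_train, y_train); cough_detector.predict(X_new)
```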
